Nonanticipating estimation applied to sequential analysis and changepoint detection
Suppose a process yields independent observations whose distributions belong
to a family parameterized by \theta\in\Theta. When the process is in control,
the observations are i.i.d. with a known parameter value \theta_0. When the
process is out of control, the parameter changes. We apply an idea of Robbins
and Siegmund [Proc. Sixth Berkeley Symp. Math. Statist. Probab. 4 (1972) 37-41]
to construct a class of sequential tests and detection schemes whereby the
unknown post-change parameters are estimated. This approach is especially
useful in situations where the parametric space is intricate and mixture-type
rules are operationally or conceptually difficult to formulate. We exemplify
our approach by applying it to the problem of detecting a change in the shape
parameter of a Gamma distribution, in both a univariate and a multivariate
setting.
Comment: Published at http://dx.doi.org/10.1214/009053605000000183 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
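To convey the flavor of such schemes, a minimal Python sketch follows (not the paper's exact procedure, and for a normal mean shift rather than a Gamma shape change): the unknown post-change mean is replaced at each step by an estimate computed only from earlier observations, so the likelihood ratio remains a martingale while the process is in control. The function name, the clipping constant 0.1 and the threshold are illustrative choices.

```python
import math

def nonanticipating_test(xs, theta0=0.0, threshold=math.log(100.0)):
    """One-sided sequential test for a mean shift in N(theta, 1) data.

    Illustrative sketch of nonanticipating estimation: at step i the
    unknown post-change mean is replaced by an estimate built from
    observations 1..i-1 only (clipped above theta0), which preserves
    the martingale property of the likelihood ratio in control.
    """
    log_lr, total = 0.0, 0.0
    for i, x in enumerate(xs, start=1):
        # nonanticipating estimate of the post-change mean
        theta_hat = max(theta0 + 0.1, total / (i - 1)) if i > 1 else theta0 + 0.1
        # log-likelihood-ratio increment of N(theta_hat, 1) vs N(theta0, 1)
        log_lr += (theta_hat - theta0) * x - (theta_hat**2 - theta0**2) / 2.0
        total += x
        if log_lr >= threshold:
            return i  # stopping time: declare a change
    return None  # no alarm raised
```

On data with a clear upward shift the statistic crosses the threshold within a few observations, while on in-control data the clipped estimate keeps the drift of the statistic negative.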
Extreme(ly) mean(ingful): Sequential formation of a quality group
The present paper studies the limiting behavior of the average score of a
sequentially selected group of items or individuals, whose underlying
distribution F belongs to the Gumbel domain of attraction of extreme value
distributions. This class contains the Normal, Lognormal, Gamma, Weibull and
many other distributions. The selection rules are the "better than average"
(\beta = 1) and the "\beta-better than average" rule, defined as follows.
After the first item is selected, another item is admitted into the group if
and only if its score is greater than \beta times the average score of those
already selected. Denote by \bar{Y}_n the average of the first n selected
items, and by T_n the time it takes to amass them. Some of the key results
obtained are: under mild conditions, for the better than average rule,
\bar{Y}_n less a suitably chosen function of n converges almost surely to a
finite random variable. Under additional conditions on the tail of F, an
explicit approximate order is obtained for T_n. When \beta > 1, the
corresponding asymptotic results are of a completely different order of
magnitude. Interestingly, for a class of distributions, a suitably normalized
version of T_n asymptotically approaches 1, almost surely for relatively small
\beta, in probability for moderate-sized \beta and in distribution when \beta
is large.
Comment: Published at http://dx.doi.org/10.1214/10-AAP684 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
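The selection rule described above is simple to state in code. A minimal sketch, with the function name and the finite-stream fallback as our own choices:

```python
def beta_better_than_average(stream, n, beta=1.0):
    """Sequentially form a quality group from an iterable of scores.

    Sketch of the rule described above: the first item is always
    selected; thereafter an item is admitted if and only if its score
    exceeds beta times the current average of the group (beta = 1
    gives the "better than average" rule).  Returns the selected
    scores and the time T_n taken to amass n of them.
    """
    selected, t = [], 0
    for x in stream:
        t += 1
        if not selected or x > beta * (sum(selected) / len(selected)):
            selected.append(x)
            if len(selected) == n:
                return selected, t
    return selected, t  # stream exhausted before n selections
```

Feeding it, say, i.i.d. Gaussian scores (a member of the Gumbel domain of attraction) lets one observe empirically how the group average and the amassing times T_n grow with n.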
Sequential Change-Point Detection Procedures That are Nearly Optimal and Computationally Simple
Sequential schemes for detecting a change in distribution often require that all of the observations be stored in memory. Lai (1995, Journal of the Royal Statistical Society, Series B 57: 613-658) proposed a class of detection schemes that retain only a finite window of the most recent observations, yet promise first-order optimality; in his asymptotics, however, the window size is unbounded. We argue that what matters computationally is not having a finite window of observations, but rather making do with a finite number of registers. We illustrate, in the context of detecting a change in the parameter of an exponential family, that one can eventually achieve even second-order asymptotic optimality using only three registers for storing information about the past. We propose a very simple procedure and show by simulation that it is highly efficient for typical applications.
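To see why a few registers can suffice for recursive detection schemes, here is a minimal sketch (a plain CUSUM for a known post-change rate, not the paper's three-register procedure) that monitors Exponential data for a rate change using a single register:

```python
import math

def cusum_exponential(xs, lam0=1.0, lam1=2.0, threshold=5.0):
    """Constant-memory CUSUM for a rate change in Exponential data.

    Illustrative sketch only: detecting a change from rate lam0 to a
    known rate lam1 needs just one register holding the current CUSUM
    statistic, updated recursively observation by observation.
    """
    s = 0.0  # single register: current CUSUM value
    for i, x in enumerate(xs, start=1):
        # log-likelihood-ratio increment of Exp(lam1) vs Exp(lam0)
        s = max(0.0, s + math.log(lam1 / lam0) - (lam1 - lam0) * x)
        if s >= threshold:
            return i  # alarm
    return None  # no alarm
```

Handling an unknown post-change parameter, as in the procedure discussed above, requires carrying a couple of additional running summaries, but the per-observation storage remains a fixed number of registers.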
Abstract
We consider the first exit time of a nonnegative Harris-recurrent Markov process from the interval [0, A] as A → ∞. We provide an alternative proof of the asymptotic exponentiality of the first exit time (suitably standardized) that does not rely on embedding in a regeneration process. We show that under certain conditions the moment generating function of a suitably standardized version of the first exit time converges to that of an Exponential(1) random variable, and we relate the standardizing constant to the quasi-stationary distribution (assuming it exists). The results are applied to evaluating the distribution of the run length to false alarm in change-point detection problems.
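A quick Monte Carlo sketch can illustrate the setting, using a reflected Gaussian random walk as the nonnegative Harris-recurrent process; the drift, the level A and the sample size are arbitrary illustrative choices.

```python
import random

def first_exit_time(A, drift=-0.2, seed=None):
    """First exit time of a reflected Gaussian random walk from [0, A].

    Sketch: W_{t+1} = max(0, W_t + N(drift, 1)) with negative drift is
    a nonnegative Harris-recurrent Markov chain; we record the first
    time t at which W_t exceeds A.  For large A the suitably
    standardized exit time is approximately Exponential(1).
    """
    rng = random.Random(seed)
    w, t = 0.0, 0
    while w <= A:
        w = max(0.0, w + rng.gauss(drift, 1.0))
        t += 1
    return t

# crude empirical look at the exit-time distribution
times = [first_exit_time(8.0, seed=k) for k in range(200)]
mean = sum(times) / len(times)
```

For an Exponential(1) limit one expects, after dividing by the mean, a coefficient of variation near 1 and a roughly memoryless empirical survival curve; in the change-point application, the exit time plays the role of the run length to false alarm.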